Notes on Minsky, "Why People Think Computers Can't", 1982

Greg Detre

Friday, 01 December, 2000

 

Introduction

machines are capable of thought-like activity = "superficial imitation of human intelligence"?

Can machines be creative?

highly creative thought isn't all that different from ordinary creative thought, and we have no idea where that comes from, but we might one day

genius is: intense concern and great proficiency in some subject, the confidence + stubbornness to stand against the scorn of peers, and some common sense

geniuses have probably learnt to manage what they learn extremely well, leading to exponential learning growth

unfortunately, we've probably evolved in the opposite direction, since it would be maladaptive if everyone in a culture found different ways to think, so the genes for genius are frequently weeded out

I suppose there must be a balance between the adaptive value of having the odd genius and the cost of everyone stubbornly doing things their own way

Problem solving

machines need to learn to do basic things before they can be expected to do amazing things

"do now" programming, e.g. BASIC (simple loops + lists, etc.)

the General Problem Solver by Newell, Shaw + Simon in the 1950s attempted to reach a goal by repeatedly applying whichever method most reduced the distance between its current position and the goal ("means-ends" and "do if needed" programming methods). But often it would end up on one of the smaller peaks, unable to find the mountain (i.e. the best solution).
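To make that failure mode concrete, here is a minimal Python sketch (the landscape function, the starting point and the step size are all invented for illustration): a greedy climber that only ever moves uphill settles on the nearer, smaller peak and never discovers the higher one.

def landscape(x):
    # A made-up one-dimensional terrain: a small peak at x=2 (height 4)
    # and a much higher mountain at x=8 (height 20).
    return -(x - 2) ** 2 + 4 if x < 5 else -(x - 8) ** 2 + 20

def hill_climb(x, step=0.5):
    # Greedy "reduce the distance" search: move whichever way is uphill,
    # and stop as soon as neither neighbour is higher (a peak, however small).
    while True:
        left, right = landscape(x - step), landscape(x + step)
        if max(left, right) <= landscape(x):
            return x
        x = x - step if left > right else x + step

print(hill_climb(0.0))   # -> 2.0: stuck on the small peak, never finds x=8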

AI is now more focused on "global" methods, taking larger views and planning ahead

"do something sensible" programming � learning and reasoning by analogy to memories of past problems

Can computers understand?

Daniel Bobrow wrote "STUDENT" (1965) to solve secondary-school-level maths questions like:

Bill's father's uncle is twice as old as Bill's father. Two years from now Bill's father will be three times as old as Bill. The sum of their ages is 92. Find Bill's age.

Using a few hundred word tricks, it could usually translate these formulaic English sentences into equations it could solve. Did STUDENT understand the sentences?
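This is not Bobrow's method, just a check of the example's arithmetic: each sentence becomes one constraint, and a short brute-force Python search (integer age ranges chosen arbitrarily) finds the unique solution.

# Hand-translated constraints (STUDENT performed this translation
# automatically with its word tricks; here it is done by hand):
for bill in range(0, 93):
    for father in range(0, 93):
        uncle = 2 * father                        # uncle is twice father's age
        if (father + 2 == 3 * (bill + 2)          # in two years, father = 3 x Bill
                and uncle + father + bill == 92): # their ages sum to 92
            print(bill, father, uncle)            # -> 8 28 56: Bill is 8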

questions about "meaning" and "understanding" are deceptive. Minsky uses numbers as a case where we think we understand, where numbers have meaning for us, yet we are unable to pinpoint it. Russell and Whitehead's attempt to define "five" as the "set of all possible sets with five members" ran into paradox and inconsistency (though they were able to get around the problem to a certain degree). Such definitions, despite being quite Platonic, remain somewhat counter-intuitive. Moreover, according to Minsky, they are an attempt to avoid the truth that meaning is relative: it derives from relations with the other things I know. To try to define a meaning in isolation, rather than as a piece of a jigsaw of knowledge, is doomed. (cf. coherentism)

he argues that we could build machines that aren't based on rigid definitions, without being any more inconsistent and dysfunctional than we are.

Webs of meaning

But just because our understanding of meaning is a fantastically complex web doesn't mean that science cannot study it. Though there may be circularities in the foundations of meaning, science can develop theories about those circles.

When talking about three, we can count out loud, count as we pick up the objects, count on our fingers, count in our heads, or match the objects against another set of three things. With five, we can also divide into groups of two and three or one and four, or arrange the objects into familiar shapes. The meaning of three, or five, rests on all of these methods together, not on any single one. In different situations we can use different methods (multiply-connected knowledge nets). In mathematics, by contrast, each concept hangs from a single slender chain of definition; that fragility is fine there, because it guarantees inconsistencies become immediately apparent, but it is too brittle for a working mind. These slender tower-chains also make things difficult for children, who learn most easily by forming robust, cross-connected networks of understanding.

Castles in the air

The more links a concept has, the more it means; hence the futility of searching for the "real" meaning of something, since if it were connected to only one other thing, it would scarcely "mean" at all.

That's why machines shouldn't be programmed with single, simple logical definitions; a machine programmed that way would never "understand" anything.

Rich, multiply-connected networks provide enough different ways to use knowledge that when one way doesn't work, you can try to figure out why. When there are many meanings in a network, you can turn things around in your mind and look at them from different perspectives; when you get stuck, you can try another view. That's what we mean by thinking!
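A toy Python sketch of that idea (the concept, the methods and the data are all invented for illustration): several independent procedures anchor the concept "three", and when one method fails or is unavailable, another can take over.

def match_against_template(items):
    remembered = ["apple", "ball", "cat"]     # a remembered set of three things
    return len(list(items)) == len(remembered)

def count_off(items):
    n = 0
    for _ in items:                           # count as we pick each object up
        n += 1
    return n == 3

def split_into_groups(items):
    items = list(items)
    two, one = items[:2], items[2:]           # split into groups of two and one
    return len(two) == 2 and len(one) == 1

METHODS = [match_against_template, count_off, split_into_groups]

def is_three(items):
    # Try each linked method in turn; the concept survives the loss of any one.
    for method in METHODS:
        try:
            if method(items):
                return True
        except Exception:
            continue                          # a broken method is not fatal
    return False

print(is_three(["x", "y", "z"]))              # -> True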

Hence, in order to build a thinking machine, we have to look at a thinking mind, and teach the machine the same way: with multiply-woven strands.

Are humans self-aware?

Some people say that computers can never be self-aware, i.e. know their own minds, though they might be able to simulate self-awareness. But can we really say that we, as humans, are self-aware? No, since we have very little knowledge of the workings of our own brain-minds. Rather, we seem to construct our own theories about what's going on inside our heads.

We all like to subscribe to a Single Agent theory: a self, a Cartesian Theatre deep inside, that takes over once the brain has processed the vibrations in the air into words and sentences, for example. It is the self that does the hard work of understanding and deciding; the self is the moral agent that needs to exist for our folk psychology and moral approach to work.

But scientists have to realise that what we see as single things, like rocks, clouds and even minds, can sometimes be described as composed of other kinds of things. The Self itself is not a single thing.

New theories about minds and machines

The objection runs: computers have no feelings or thoughts, because all they do is "execute a series of incredibly intricate processes, perhaps millions at a time". Yet we know as little about the limits of computers as we do about the workings of our own minds.

To understand minds, we will have to learn a great deal about the workings of complex machines; an attempt at a complete theory of mind without a great deal of knowledge about machines seems hopeless, unless you mistakenly think that the mind is not a very complex thing.

In order to make intelligent machines, we need better theories of how to "represent", which will require building the sort of webs of knowledge and know-how that we rely on: common sense.

Rather than large, useful but shallow knowledge (i.e. expert systems), we need to look at deep, versatile knowledge, and the ability to learn from experience. Then we'll get machines that make up theories and think about themselves and their own workings. We'll probably be able to tell easily when we reach that point: for starters, they'll probably object to being called "machines".

Knowledge and common sense

We laugh at the idea of intelligent machines because they're so slavishly obedient and do things that seem so obviously dumb. Yet it's odd, when you consider that computers have excelled at "advanced" subjects (a 1961 program written by James Slagle could solve calculus problems at the level of college students) while struggling with "easy" ones. This could be because it is much easier to do difficult, expert things than to discover or learn things in the first place. We are much better at writing "expert" systems than programs with common sense. In order to learn intelligently, machines will have to (like us) be more reflective, attempting to identify causes and explanations, and to relate and add things to the network of current understanding.

Unconscious fears and phobias

Learning what is wrong may be more important than learning what is right. Knowing what went wrong in past experience may guide us subconsciously in the future. Ideas and lines of thought are what remains after all the alternatives have been invisibly/unconsciously pruned away, simply because if we were to think about all of them, we'd be unable to think about anything at all. The conscious mind would be like an executive who doesn't want to be burdened with every decision, but only with the summaries and critical conclusions from the other, smaller parts of the mind that know much more about much less.
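A toy Python sketch of that pruning (the censor memory and the example plans are invented for illustration): a record of past failures silently discards options before they reach the "executive", which only ever sees the survivors.

known_bad = {"touch the stove", "argue with the referee"}   # learned failures

def censor(options):
    # Prune alternatives before they ever reach conscious consideration.
    return [o for o in options if o not in known_bad]

def executive_choose(options):
    survivors = censor(options)                  # the executive never sees
    return survivors[0] if survivors else None   # what was pruned, only these

plans = ["touch the stove", "make tea", "argue with the referee"]
print(executive_choose(plans))                   # -> "make tea"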

Self-conscious computers

Might it not then be possible to create machines that are more self-conscious than we are? If we want machines to be versatile and exercise good judgement, we may have to give them self-insight, even though that may prove unwise. Once they can rewire themselves, they will not be constrained, as we are, by the wiring of our brains.

If we can, then should we build machines that have richer mental lives, and are "better" than us?

As we learn more about AI, we will learn more about our own mental processes and about thinking and feeling; they will no longer be mysterious, but rather complex though comprehensible webs of ways to represent and use ideas, and this will give us new ideas. We cannot claim to be able to differentiate between the minds of men and those of possible machines.